
    Using Tuangou to reduce IP transit costs

    A majority of ISPs (Internet Service Providers) support connectivity to the entire Internet by transiting their traffic via other providers. Although transit prices per Mbps decline steadily, the overall transit costs of these ISPs remain high or even increase due to traffic growth. The discontent of ISPs with high transit costs has yielded notable innovations such as peering, content distribution networks, multicast, and peer-to-peer localization. While the above solutions tackle the problem by reducing the transit traffic, this paper explores a novel approach that reduces the transit costs without altering the traffic. In the proposed CIPT (Cooperative IP Transit), multiple ISPs cooperate to jointly purchase IP (Internet Protocol) transit in bulk. The aggregate transit costs decrease due to the economies-of-scale effect of typical subadditive pricing as well as burstable billing: not all ISPs transit their peak traffic during the same period. To distribute the aggregate savings among the CIPT partners, we propose Shapley-value sharing of the CIPT transit costs. Using public data about the IP traffic of 264 ISPs and transit prices, we quantitatively evaluate CIPT and show that significant savings can be achieved, both in relative and absolute terms. We also discuss the organizational embodiment, relationship with transit providers, traffic confidentiality, and other aspects of CIPT.
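    The subadditive-pricing and Shapley-sharing ideas can be illustrated with a toy sketch. The price function, traffic figures, and ISP names below are hypothetical and not taken from the paper:

    ```python
    from itertools import permutations
    from math import factorial

    # Hypothetical subadditive transit price: cost grows sublinearly with rate (Mbps).
    def price(rate_mbps):
        return 10.0 * rate_mbps ** 0.75

    # Per-ISP traffic samples (Mbps) over the same billing intervals; the peaks
    # do not coincide, which is what makes the joint purchase cheaper.
    traffic = {
        "A": [100, 20, 30],
        "B": [10, 90, 25],
        "C": [15, 30, 80],
    }
    T = len(next(iter(traffic.values())))

    def coalition_cost(members):
        if not members:
            return 0.0
        # Bill the coalition on the peak of its *aggregate* traffic.
        agg = [sum(traffic[m][t] for m in members) for t in range(T)]
        return price(max(agg))

    def shapley(players):
        """Average marginal cost contribution over all join orders."""
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            so_far = []
            for p in order:
                phi[p] += coalition_cost(so_far + [p]) - coalition_cost(so_far)
                so_far.append(p)
        n_fact = factorial(len(players))
        return {p: v / n_fact for p, v in phi.items()}
    ```

    With subadditive pricing and non-coinciding peaks, the joint bill is below the sum of standalone bills, and the Shapley shares sum exactly to the joint bill (budget balance).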

    Temporal rate limiting: Cloud elasticity at a flat fee

    In the current usage-based pricing scheme offered by most cloud computing providers, customers are charged based on the capacity and the lease time of the resources they capture (bandwidth, number of virtual machines, IOPS rate, etc.). Taking advantage of this pricing scheme, customers can implement auto-scaling purchase policies by leasing (e.g., hourly) the necessary amounts of resources to satisfy a desired QoS threshold under their current demand. Auto-scaling yields strict QoS and variable charges. Some customers, however, would be willing to settle for a more relaxed statistical QoS in exchange for a predictable flat charge. In this work we propose Temporal Rate Limiting (TRL), a purchase policy that permits a customer to optimally allocate a specified purchase budget over a predefined period of time. TRL offers the same expected QoS as auto-scaling but at a lower, flat charge. It also outperforms, in terms of QoS, a naive flat-charge policy that splits the available budget uniformly in time. We quantify the benefits of TRL analytically, and we also deploy TRL on Amazon EC2 and perform a live validation in the context of a "blacklisting" application for Twitter.
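    A toy model conveys the trade-off the abstract describes. This is not the paper's optimization; the demand series, unit price, and budget below are invented for illustration:

    ```python
    # Toy flat-fee model: spend a fixed budget on capacity across T intervals
    # and compare dropped demand under two policies with the same total charge.
    PRICE = 1.0  # hypothetical cost per unit of capacity per interval
    demand = [4, 1, 9, 2, 6, 8, 3, 7]

    def dropped(capacity):
        """Demand that exceeds the purchased capacity goes unserved."""
        return sum(max(d - c, 0.0) for d, c in zip(demand, capacity))

    budget = 24.0                 # total spend for the whole period
    units = budget / PRICE        # capacity-units it buys

    # Naive flat policy: split capacity uniformly in time.
    uniform = [units / len(demand)] * len(demand)

    # TRL-like policy: allocate capacity in proportion to (forecast) demand.
    total = sum(demand)
    proportional = [units * d / total for d in demand]
    ```

    Both policies cost exactly the budget, but the demand-proportional allocation drops less demand than the uniform split, mirroring the QoS gap between TRL and the naive flat-charge policy.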

    An analysis of the economic impact of strategic deaggregation

    The work of Marcelo Bagnulo has been partially supported by project MASSES (TEC2012-35443), funded by the Spanish Ministry of Economy and Competitiveness (MINECO). The advertisement of more-specific prefixes provides network operators with a fine-grained method to control interdomain ingress traffic. Prefix deaggregation is recognized as a steady, long-lived phenomenon at the interdomain level, despite its well-known negative effects for the community. In this paper, we look past the original motivation for deploying deaggregation in the first place, and instead focus on its aftermath. We identify and analyze one particular side-effect of deaggregation concerning the economic impact of this type of strategy: decreasing the transit traffic bill. We propose a general Internet model to analyze the effect of advertising more-specific prefixes on the burstiness of incoming transit traffic. We show that deaggregation combined with selective advertisements has a traffic-stabilization side-effect, which translates into a decrease of the transit traffic bill. Next, we develop a methodology for Internet Service Providers (ISPs) to monitor general occurrences of prefix deaggregation within their customer base. The ISPs can thus detect selective advertisements of deaggregated prefixes and identify customers that impact the business of their providers. We apply the proposed methodology on a complete set of data, including routing, traffic, topological and billing information provided by a major Japanese ISP, and we discuss the obtained results.
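    The billing mechanism behind this side-effect is standard burstable (95th-percentile) billing: a stabilized traffic profile bills less than a bursty one of identical volume. A minimal sketch, with invented traffic figures:

    ```python
    from math import ceil

    def p95_bill(samples, price_per_mbps=1.0):
        """Burstable billing: sort the 5-minute samples, discard the top 5%,
        and bill the highest remaining sample times the unit price."""
        s = sorted(samples)
        return price_per_mbps * s[ceil(0.95 * len(s)) - 1]

    # Two hypothetical billing periods with identical total volume:
    bursty = [10] * 90 + [100] * 10   # spiky ingress traffic
    flat   = [19] * 100               # the same volume, stabilized
    ```

    The bursty profile bills at 100 Mbps while the stabilized one bills at 19 Mbps, even though both carry the same volume; this is the lever that selective advertisement of more-specific prefixes pulls.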

    COLTRANE: ConvolutiOnaL TRAjectory NEtwork for Deep Map Inference

    The process of automatic generation of a road map from GPS trajectories, called map inference, remains a challenging task to perform on geospatial data from a variety of domains, as the majority of existing studies focus on road maps in cities. Inherently, existing algorithms are not guaranteed to work on unusual geospatial sites, such as airport tarmacs, pedestrianized paths and shortcuts, or animal migration routes. Moreover, deep learning has not been explored well enough for such tasks. This paper introduces COLTRANE, ConvolutiOnaL TRAjectory NEtwork, a novel deep map inference framework that operates on GPS trajectories collected in various environments. The framework includes an Iterated Trajectory Mean Shift (ITMS) module to localize road centerlines, which copes with noisy GPS data points. A convolutional neural network trained on our novel trajectory descriptor is then introduced into the framework to detect and accurately classify junctions for refinement of the road maps. COLTRANE yields up to 37% improvement in F1 scores over existing methods on two distinct real-world datasets: city roads and airport tarmac. Comment: BuildSys 201
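    The ITMS module builds on classical mean shift: points iteratively move toward the local mean of their neighbors, so noisy GPS samples contract toward the density ridge of the underlying centerline. A minimal flat-kernel sketch on synthetic points (the bandwidth and coordinates are illustrative, not the paper's):

    ```python
    def mean_shift(points, bandwidth=1.0, iters=5):
        """Flat-kernel mean shift: repeatedly replace each point by the mean
        of all points within `bandwidth` of it."""
        pts = [list(p) for p in points]
        for _ in range(iters):
            new = []
            for p in pts:
                nbrs = [q for q in pts
                        if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= bandwidth ** 2]
                new.append([sum(q[0] for q in nbrs) / len(nbrs),
                            sum(q[1] for q in nbrs) / len(nbrs)])
            pts = new
        return pts

    # Noisy samples scattered around a horizontal centerline at y = 0.
    points = [(0.0, 0.3), (0.5, -0.2), (1.0, 0.1), (1.5, -0.3), (2.0, 0.2)]
    ```

    After a few iterations the perpendicular scatter shrinks, which is the sense in which iterated mean shift "localizes" a centerline from noisy trajectories.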

    A non-recursive algorithm for polygon triangulation

    In this paper, an algorithm for convex polygon triangulation based on reverse Polish notation is proposed. The formal grammar method is used as the starting point of the investigation. This idea is "translated" to the field of arithmetic expressions, enabling application of the reverse Polish notation method.
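    For context, the correspondence the abstract exploits is classical: triangulations of a convex n-gon are in bijection with bracketings of an (n-1)-operand arithmetic expression, both counted by the Catalan number Catalan(n-2). The sketch below shows the simplest non-recursive construction (a fan triangulation) and the Catalan count; it is illustrative and is not the paper's RPN-based algorithm:

    ```python
    from math import comb

    def fan_triangulate(n):
        """Iteratively triangulate a convex polygon with vertices 0..n-1 by
        connecting every non-adjacent vertex to vertex 0. Non-recursive, but
        produces only one of the many possible triangulations."""
        return [(0, i, i + 1) for i in range(1, n - 1)]

    def num_triangulations(n):
        """Number of distinct triangulations of a convex n-gon:
        Catalan(n-2) = C(2(n-2), n-2) / (n-1)."""
        k = n - 2
        return comb(2 * k, k) // (k + 1)
    ```

    Any triangulation of an n-gon has exactly n-2 triangles; a square has 2 triangulations and a pentagon has 5, matching the Catalan numbers.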

    Sharing the cost of backbone networks: cui bono?

    We study the problem of how to share the cost of a backbone network among its customers. A variety of empirical cost-sharing policies are used in practice by backbone network operators, but very little ever reaches the research literature about their properties. Motivated by this, we present a systematic study of such policies, focusing on the discrepancies between their cost allocations. We aim to quantify how the selection of a particular policy biases an operator's understanding of cost generation. We identify F-discrepancies, due to the specific function used to map traffic into cost (e.g., volume vs. peak rate vs. 95th percentile), and M-discrepancies, which have to do with where traffic is metered (per-device vs. ingress metering). We also identify L-discrepancies, relating to the liability of individual customers for triggered upgrades and consequent costs (full vs. proportional), and finally TCO-discrepancies, emanating from the fact that the cost of carrying a bit is not uniform across the network (old vs. new equipment, high vs. low energy or real estate costs, etc.). Using extensive traffic, routing, and cost data from a tier-1 network, we show that F-discrepancies are large when looking at individual links but cancel out when considering network-wide cost-sharing. Metering at ingress points is convenient but leads to large M-discrepancies, while TCO-discrepancies are huge. Finally, L-discrepancies are intriguing and esoteric, but understanding them is central to determining the cost a customer inflicts on the network.
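    The F-discrepancy can be made concrete with a small sketch: the same link cost allocated under three traffic-to-cost functions gives very different shares to a steady sender versus a bursty one. The customer profiles and the 100-unit link cost below are hypothetical:

    ```python
    from math import ceil

    # Three candidate functions mapping a customer's traffic samples to a cost driver.
    def volume(samples):
        return sum(samples)

    def peak(samples):
        return max(samples)

    def p95(samples):
        s = sorted(samples)
        return s[ceil(0.95 * len(s)) - 1]

    # Hypothetical per-customer traffic on one link (same metering intervals).
    customers = {
        "steady": [10] * 20,            # flat sender
        "bursty": [1] * 18 + [60, 60],  # quiet, with two large spikes
    }

    def shares(f, link_cost=100.0):
        """Split the link cost proportionally to f(customer traffic)."""
        tot = sum(f(s) for s in customers.values())
        return {c: link_cost * f(s) / tot for c, s in customers.items()}
    ```

    Under the volume function the steady sender pays most of the link, while under peak or 95th-percentile metering the bursty sender does; on a single link the choice of function dominates the allocation, which is exactly the per-link F-discrepancy the paper measures.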